Running a Prompt

After configuring the parameters, users can run the selected prompt and watch the output as it is generated in real time. This interactive loop provides quick feedback, letting users refine both their input and the model's response until they reach the desired outcome.

Example: Running an Audio Diarization Prompt

For instance, when using the Audio Diarization prompt, users can input data, such as an audio file or transcript. The AI model processes the input and generates a detailed response, like transcriptions with timestamps and speaker separation.

  • User Input: Users provide input data, such as audio files, code, or text. For Audio Diarization, this input could be an audio file or a transcript of spoken content.
  • Model Output: The AI model processes the input and generates a response. In this example, the output might include a transcription of the audio with timestamped text and speaker separation, helping to distinguish between speakers and their speech timings.

The interaction is displayed in a dialogue format with clearly marked User inputs and Model outputs, providing an easy-to-follow conversation flow.
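A diarized output like the one described above can be post-processed into structured records. The sketch below assumes a simple `[MM:SS] Speaker N: text` line format; the exact format the model produces may differ, so treat the pattern as an illustration rather than a guaranteed output shape.

```python
import re

# Assumed diarization line format: "[MM:SS] Speaker N: text".
# Adjust the pattern if your model emits timestamps differently.
LINE_PATTERN = re.compile(r"\[(\d{2}):(\d{2})\]\s+(Speaker \d+):\s+(.*)")

def parse_diarized_line(line):
    """Split one timestamped transcript line into its fields."""
    match = LINE_PATTERN.match(line.strip())
    if not match:
        return None  # line does not follow the assumed format
    minutes, seconds, speaker, text = match.groups()
    return {
        "offset_seconds": int(minutes) * 60 + int(seconds),
        "speaker": speaker,
        "text": text,
    }

transcript = [
    "[00:03] Speaker 1: Good morning, everyone.",
    "[00:07] Speaker 2: Thanks for joining the call.",
]
records = [parse_diarized_line(line) for line in transcript]
```

Structuring the output this way makes it easy to compute per-speaker statistics or feed the transcript into downstream tools.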


Adjusting Parameters and Re-Running the Prompt

Once the initial run is completed, users can refine the output by adjusting the prompt's parameters. These settings are available on the Run Settings panel, typically displayed on the right side of the screen.

Key Adjustable Parameters:

  • Token Count: This parameter caps the number of tokens (subword units of text, roughly word fragments) the model may generate, limiting the length of the response so it is neither too long nor cut off unexpectedly.
  • Temperature: The Temperature setting controls how deterministic or creative the AI's responses are. Higher values (e.g., 0.8) produce more varied, creative outputs, while lower values (e.g., 0.2) produce more focused, predictable responses.
  • JSON Mode: Enable or disable JSON Mode to constrain the output to structured JSON. This is especially helpful when working with APIs or other systems that require structured data formats.
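The parameters above can be represented as a simple settings dictionary and checked before a run. The field names and numeric bounds below are assumptions for illustration; consult your model's documentation for its exact limits.

```python
def validate_run_settings(settings):
    """Check a run-settings dict against typical bounds.

    Field names and ranges here are illustrative assumptions,
    not the authoritative limits of any particular model.
    """
    temperature = settings.get("temperature", 1.0)
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0.0 and 2.0")
    token_count = settings.get("max_output_tokens", 1024)
    if token_count < 1:
        raise ValueError("max_output_tokens must be positive")
    json_mode = settings.get("json_mode", False)
    if not isinstance(json_mode, bool):
        raise ValueError("json_mode must be a boolean")
    return settings

settings = validate_run_settings({
    "temperature": 0.2,        # low value: focused, predictable output
    "max_output_tokens": 256,  # cap on response length
    "json_mode": True,         # request structured JSON output
})
```

Keeping the settings in one dictionary also makes it straightforward to log or save the configuration used for each run.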

After making adjustments, users can click the Run button to re-execute the prompt. The system will reprocess the input according to the new settings, and the output will be updated in real-time, allowing users to see how the changes affect the result.


Saving the Conversation

Once you’ve run the prompt and reviewed the generated output, you can save a copy of the conversation for future reference or analysis. This option is useful for tracking input-output changes, sharing findings with team members, or reusing results in other workflows.

Saving Options:

  • Save a Copy: This feature saves the entire conversation, including both the input and the output, for documentation or further analysis.
  • Get a Code: Clicking the Get a Code button generates a code snippet that reproduces the current prompt configuration. You can use this snippet to quickly reference or reuse the same setup in future sessions, ensuring consistency and saving time.
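As a rough illustration of the "save a copy" idea, a conversation can be serialized as a list of role/content turns and written to a JSON file for later reuse. The file layout and helper names below are assumptions, not the product's actual export format.

```python
import json

def save_conversation(turns, path):
    """Write a list of {role, content} turns to a JSON file.

    The schema here is a sketch; the tool's real export
    format may differ.
    """
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"turns": turns}, f, indent=2)

def load_conversation(path):
    """Read the turns back from a saved conversation file."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)["turns"]

turns = [
    {"role": "user", "content": "Diarize this audio transcript."},
    {"role": "model", "content": "[00:03] Speaker 1: Good morning."},
]
save_conversation(turns, "conversation.json")
restored = load_conversation("conversation.json")
```

A round-trip like this is enough to archive runs, diff outputs across parameter changes, or share results with teammates.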

By saving your outputs, you can build a library of reusable prompts that can be fine-tuned and adapted to various tasks or scenarios.


Best Practices for Running Prompts

To optimize your results when running prompts, follow these best practices:

  • Test Different Parameter Settings: Experiment with various values for Temperature and Token Count to determine the most suitable configuration for your task.
  • Iterate and Refine: Don't hesitate to make incremental adjustments and re-run the prompt multiple times. This iterative approach helps fine-tune the output and achieve more accurate and refined results.
  • Provide Clear, Descriptive Inputs: Clear, specific, and well-defined input will guide the AI model in generating more accurate, relevant, and high-quality responses.
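The first two practices above amount to a small parameter sweep: generate one settings variant per candidate value, run each, and compare the outputs. The helper below is a minimal sketch of that idea; the setting names are the same illustrative ones used throughout, not an official API.

```python
def parameter_sweep(base_settings, temperatures):
    """Yield one run-settings dict per candidate temperature.

    base_settings is copied so each variant is independent;
    field names are illustrative assumptions.
    """
    for t in temperatures:
        settings = dict(base_settings)
        settings["temperature"] = t
        yield settings

base = {"max_output_tokens": 256}
candidates = list(parameter_sweep(base, [0.2, 0.5, 0.8]))
```

Running the same prompt once per candidate configuration and comparing the outputs side by side makes it much easier to pick the best setting for a task.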

By adhering to these best practices, you can maximize the potential of the Prompt Gallery, leveraging AI-driven automation and analysis to enhance your workflows and project outcomes.